Pub Date: 2023-03-18 | DOI: 10.25139/inform.v8i2.5128
Blended Learning Vocationalogy Entrepreneurship Program: Analysis of Human-Computer Interaction Based on Technology Acceptance Model (TAM)
Rini Agustina, Endah Andayani, Imron Sya'roni, Della Rulita Nurfaizana, D. Suprianto
Blended learning is needed as a learning medium that can be used online, offline, asynchronously, or synchronously. The use of blended learning in entrepreneurship programs in vocational schools is intended to prevent learning loss during and after the pandemic. This study evaluates the results of developing web-based vocational media used for blended learning against the measurement criteria of the Technology Acceptance Model (TAM). The criteria cover Perceived Usefulness (TPU) and Perceived Ease of Use (TPE), each with indicators of functionality (TFL), accessibility (TAC), and computer playfulness (TCP), which are then accumulated in the Behavioural Intention (TBI) aspect. The evaluation was analyzed using SEM (AMOS) and SPSS. A total of 121 class XI vocational high school (SMK) students were involved in data collection, using a questionnaire of 19 questions. The estimation results show that every aspect of TAM contributes well to users' acceptance of the vocationalogy media. The evaluation showed that 86.85% of users felt helped, liked the media, and found it easy to learn with.
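A minimal sketch of how questionnaire responses could be aggregated into the TAM constructs described above and expressed as an acceptance percentage. The column names, the 5-point Likert assumption, and the grouping of items into TPU/TPE/TBI are illustrative assumptions, not the authors' instrument or their SEM analysis.

```python
import pandas as pd

# Hypothetical questionnaire export: one row per respondent,
# columns Q1..Q19 holding 5-point Likert answers (assumption).
df = pd.read_csv("tam_responses.csv")

# Illustrative mapping of items to TAM constructs (assumption).
constructs = {
    "TPU": ["Q1", "Q2", "Q3", "Q4"],    # Perceived Usefulness
    "TPE": ["Q5", "Q6", "Q7", "Q8"],    # Perceived Ease of Use
    "TBI": ["Q17", "Q18", "Q19"],       # Behavioural Intention
}

scores = {}
for name, items in constructs.items():
    # Composite construct score = mean of its indicator items per respondent.
    scores[name] = df[items].mean(axis=1)

summary = pd.DataFrame(scores)
# Express acceptance as a percentage of the maximum Likert score (5).
acceptance_pct = summary.mean() / 5 * 100
print(acceptance_pct.round(2))
```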
{"title":"Blended Learning Vocationalogy Entrepreneurship Program: Analysis of Human-Computer Interaction Based on Technology Acceptance Model (TAM)","authors":"Rini Agustina, Endah Andayani, Imron Sya'roni, Della Rulita Nurfaizana, D. Suprianto","doi":"10.25139/inform.v8i2.5128","DOIUrl":"https://doi.org/10.25139/inform.v8i2.5128","url":null,"abstract":"Blended learning is needed as a learning medium that can be used online, offline, asynchronously, or synchronously. The use of blended learning in entrepreneurship programs in vocational schools is intended to prevent learning loss in learning during the pandemic and post-pandemic. This study evaluates the results of developing web-based vocational media used as blended learning using the Technology Acceptance Model (TAM) measurement criteria. The measurement criteria include aspects of Perceived Usefulness (TPU) and aspects of Perceived Ease of Use (TPE), each of which has indicators of functionality (TFL), accessibility (TAC), and Computer Playfulness (TCP), which are then accumulated in the Behaviour Intention aspect (TBI). This evaluation study was analyzed using SEM (AMOS) and SPSS. A total of 121 class, XI SMK students were involved in collecting data in this research. Data was taken using a questionnaire consisting of 19 questions. The estimation results show that every aspect of TAM contributes quite well regarding vocationalogy media users. The evaluation results showed that 86.85% of users felt helped, liked, and found it easy when learning to use the vocationalogy media. \u0000 ","PeriodicalId":52760,"journal":{"name":"Inform Jurnal Ilmiah Bidang Teknologi Informasi dan Komunikasi","volume":"9 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79174458","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-03-18 | DOI: 10.25139/inform.v8i2.5700
Sentiment Analysis for IMDb Movie Review Using Support Vector Machine (SVM) Method
D. D. Nur Cahyo, F. Farasalsabila, Verra Budhi Lestari, Hanafi, Tutik Lestari, Fahmi Rusdi Al Islami, M. A. Maulana
Many researchers currently employ supervised machine learning methods for sentiment analysis. Analysis can be done on movie reviews, Twitter posts, online product reviews, blogs, discussion forums, Myspace comments, and social networks, and Support Vector Machine (SVM) classifiers have often been used to analyze Twitter data sets with different parameters. In this study, an SVM classifier was applied to IMDb movie review data. The preprocessing phase consisted of filtering and classifying a total of 50,000 data points, split into 40,000 reviews for training and 10,000 reviews for testing; the corpus contains 25,000 positive and 25,000 negative reviews. We adopted evaluation metrics including accuracy, precision, recall, and F1-score. According to the experiment report, SVM with Bag of Words (BoW) features achieved the highest accuracy of 88.59%. A grid search was then run over the SVM parameters to find the best configuration, and SVM with Term Frequency–Inverse Document Frequency (TF-IDF) features achieved the highest accuracy of 91.27%.
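The pipeline the abstract describes (TF-IDF features, an SVM classifier, and a grid search over SVM parameters) can be sketched with scikit-learn as below. The file name, column names, and parameter grid are assumptions for illustration, not the authors' exact configuration.

```python
import pandas as pd
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.metrics import classification_report

# Hypothetical IMDb export: 50,000 rows with 'review' and 'sentiment' columns.
data = pd.read_csv("imdb_reviews.csv")
X_train, X_test, y_train, y_test = train_test_split(
    data["review"], data["sentiment"], test_size=0.2, random_state=42)

pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(lowercase=True, stop_words="english")),
    ("svm", LinearSVC()),
])

# Grid search over the SVM regularisation parameter (illustrative grid).
grid = GridSearchCV(pipeline, {"svm__C": [0.01, 0.1, 1, 10]},
                    cv=5, scoring="accuracy")
grid.fit(X_train, y_train)

print("best params:", grid.best_params_)
print(classification_report(y_test, grid.predict(X_test)))
```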
{"title":"Sentiment Analysis for IMDb Movie Review Using Support Vector Machine (SVM) Method","authors":"D. D. Nur Cahyo, F. Farasalsabila, Verra Budhi Lestari, Hanafi, Tutik Lestari, Fahmi Rusdi Al Islami, M. A. Maulana","doi":"10.25139/inform.v8i2.5700","DOIUrl":"https://doi.org/10.25139/inform.v8i2.5700","url":null,"abstract":"Many researchers currently employ supervised, machine learning methods to study sentiment analysis. Analysis can be done on movie reviews, Twitter reviews, online product reviews, blogs, discussion forums, Myspace comments, and social networks. Support Vector Machines (SVM) classifiers are used to analyze the Twitter data set using different parameters. The analysis and discussion were undertaken to allow for the conclusion that SVM has been successfully implemented utilizing the IMDb data for this study (Support Vector Machine). To complete this study, the preprocessing phase, which consisted of filtering and classifying data using SVM with a total of 50.000 data points, was completed after collecting up to 40.000 reviews to use as training data and 10.000 reviews to use as testing data. 25.000 positive and 25.000 negative points make up the view. In this study, we adopted an evaluation matrix including accurate, precision, recall, and F1-score. According to the experiment report, our model achieved SVM with Bags of Word (BoW) used to get results for the highest accuracy test, which was 88,59% accurate. Then, using grid-search, optimize against the SVM parameters to find the best parameters that SVM models can use. Our model achieved Term Frequency–inverse Document Frequency (TF-IDF) was used to get results for the highest accuracy test, which was 91,27% accurate. \u0000 ","PeriodicalId":52760,"journal":{"name":"Inform Jurnal Ilmiah Bidang Teknologi Informasi dan Komunikasi","volume":"37 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79830068","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-01-31 | DOI: 10.25139/inform.v8i1.5760
Estimation of Brake Pad Wear Using Fuzzy Logic in Real Time
A. Fahruzi, Adam Yuda Wardaya, Andy Suryowinoto
Brake pad components are important in two-wheeled vehicles because they concern the safety of the driver and others. Brake lining wear is an unavoidable phenomenon, because braking works by pressing two surfaces against each other so that they rub together. Brake pads that are not replaced prevent the brakes from working normally, so the potential for accidents grows. One contributing factor is negligence and ignorance of the condition of brake linings that are due for replacement. This paper proposes a tool that can estimate the condition of the brake pads based on the level of wear in real time using the fuzzy logic method. Fuzzy logic estimates the degree of wear of brake pads from speed, brake fluid pressure, and braking duration parameters. The brake type used in this paper is the disc brake used on two-wheeled vehicles. Testing is not carried out on an actual two-wheeled vehicle but on a brake pad wear test rig that works like one. Based on the test results, the fuzzy logic embedded in the Arduino microcontroller can display the estimated condition of the brake pads on an LCD in real time, based on fuzzy set data obtained through experimental tests. In the experiments, the brake lining wear test was carried out for 30 minutes at pressures of 10 and 17 psi; the thickness of the brake linings decreased by around 21.66% and 26.68%, respectively.
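A minimal, pure-Python sketch of the kind of fuzzy inference the paper describes: triangular membership functions over speed, brake-fluid pressure, and braking duration, combined by simple rules into a wear estimate. The membership ranges, rules, and wear levels are illustrative assumptions, not the calibrated fuzzy sets running on the authors' Arduino test rig.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def estimate_wear(speed_kmh, pressure_psi, duration_s):
    # Fuzzify the three inputs (ranges are assumptions for illustration).
    fast     = tri(speed_kmh, 40, 80, 120)
    hard     = tri(pressure_psi, 10, 17, 25)
    long_brk = tri(duration_s, 10, 30, 60)

    # Rule strengths via min (Mamdani-style AND of the antecedents).
    high_wear = min(fast, hard, long_brk)
    med_wear  = min(fast, hard)
    low_wear  = 1.0 - max(high_wear, med_wear)

    # Defuzzify with a weighted average of representative wear levels (fractions).
    levels = {0.05: low_wear, 0.15: med_wear, 0.30: high_wear}
    num = sum(level * w for level, w in levels.items())
    den = sum(levels.values()) or 1.0
    return 100 * num / den   # percentage of lining thickness lost

print(f"estimated wear: {estimate_wear(70, 17, 30):.1f}%")
```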
{"title":"Estimation of Brake Pad Wear Using Fuzzy Logic in Real Time","authors":"A. Fahruzi, Adam Yuda Wardaya, Andy Suryowinoto","doi":"10.25139/inform.v8i1.5760","DOIUrl":"https://doi.org/10.25139/inform.v8i1.5760","url":null,"abstract":"Brake pad components are important in two-wheeled vehicles because they concern the driver's and others' safety. Brake lining wear is an unavoidable phenomenon. This is because of the concept of braking, which involves bringing two things into contact with each other such that they press against each other and rub against each other. Brake pads that have not been replaced make the brakes unable to work normally, so the potential for accidents is even greater. One of the factors causing the problem is negligence and ignorance of the condition of the ream linings, which should be time for the change. This paper proposes a tool that can estimate the condition of the brake pads based on the level of wear in real-time using the fuzzy logic method. Fuzzy logic will estimate the degree of wear of brake pads based on speed, brake fluid pressure, and braking duration parameters. The type of brake used in this paper is the type of disk brake used on two-wheeled vehicles. The test is not carried out or applied to two-wheeled vehicles but is applied to brake pad wear test equipment that works like a two-wheeled vehicle. Based on the test results, the fuzzy logic implanted into the Arduino microcontroller can provide information on the estimated condition of the brake pads on LCDs in real time based on fuzzy set datasets obtained through experimental tests. Based on the experimental results, the brake lining wear test was carried out for 30 minutes with a pressure of 10 and 17 psi. The results showed that the thickness of the brake linings decreased by around 21.66% and 26.68%, respectively. \u0000 ","PeriodicalId":52760,"journal":{"name":"Inform Jurnal Ilmiah Bidang Teknologi Informasi dan Komunikasi","volume":"34 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77795039","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-01-30 | DOI: 10.25139/inform.v8i1.5685
Classification of Pistachio Nut Using Convolutional Neural Network
The application of innovative technologies in the agricultural industry has the potential to boost yield productivity and improve the well-being of farmers. Pistachio nuts are widely considered among the most valuable agricultural products. Kirmizi and Siirt are two distinct varieties of pistachio nuts, and it is essential to categorize them to keep the product's quality and value high. This paper proposes classifying the Kirmizi and Siirt pistachio varieties with the Convolutional Neural Network (CNN) models Inception-V3 and ResNet50. The dataset used in this research consists of 2148 pistachio images, divided into 80% training data, 10% testing data, and 10% validation data. First, we pre-process and normalize the images by wrapping and cropping them. Next, the Inception-V3 and ResNet50 architectures were trained and tested on the sample datasets. The experimental results show accuracies of 96% and 86%, respectively, so it can be concluded that the CNN model using the Inception-V3 architecture outperforms ResNet50.
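A hedged sketch of the transfer-learning setup the abstract implies, using tf.keras with an ImageNet-pretrained Inception-V3 base. The directory layout, image size, split (80/20 here rather than the paper's 80/10/10), and training hyperparameters are assumptions, and ResNet50 could be swapped in the same way.

```python
import tensorflow as tf

IMG_SIZE = (299, 299)  # Inception-V3's default input resolution

# Hypothetical directory layout: pistachio/{kirmizi,siirt}/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "pistachio", validation_split=0.2, subset="training",
    seed=42, image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "pistachio", validation_split=0.2, subset="validation",
    seed=42, image_size=IMG_SIZE, batch_size=32)

base = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,))
base.trainable = False  # freeze the pretrained convolutional backbone

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # map pixels to [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),      # Kirmizi vs. Siirt
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=5)
```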
{"title":"Classification of Pistachio Nut Using Convolutional Neural Network","authors":"","doi":"10.25139/inform.v8i1.5685","DOIUrl":"https://doi.org/10.25139/inform.v8i1.5685","url":null,"abstract":"The application of innovative technologies in the agricultural industry has the potential to boost yield productivity and affect the well-being of farmers. Pistachio nuts are widely considered among the most precious things agriculture produces. The kirmizi and sirt are the two distinct varieties of pistachio nuts that are available. It is essential to categorize the different types of pistachio nuts to keep the product's quality and worth at a high level. This paper proposes a classified pistachio variety of kirmizi and siirt based on Convolutional Neural Network (CNN) models Inception V3 and ResNet50. The dataset used in this research is 2148 samples of pistachio images. The sample images are divided into 80% training data, 10% testing data, and 10% validation data. First, we pre-process and normalize by wrapping and cropping the images. The next, Inception-V3 and ResNet50 architectures, were trained and tested on the sample datasets. The experimental results show that the accuracy of both models is 96% and 86%, respectively. This can be concluded that the performance of the CNN model using Inception-V3 architecture outperforms ResNet50 architecture. \u0000 \u0000 \u0000 ","PeriodicalId":52760,"journal":{"name":"Inform Jurnal Ilmiah Bidang Teknologi Informasi dan Komunikasi","volume":"7 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87858684","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-01-30 | DOI: 10.25139/inform.v8i1.4758
Website Analysis and Design Using Iconix Process Method: Case Study: Kedai Lengghian
Anindo Saka Fitri, Eli Nurhayati, Nadilla Anidew, A. Pratita, Syifa’ Saskia Elfaretta
Kedai Lengghian is a culinary business that has not yet applied technology to support its business processes. In the era of globalization, information technology is developing rapidly, enabling business development, and this development intensifies competition in the culinary business. The researchers offer a solution so that the managers and employees of Kedai Lengghian can increase the effectiveness and efficiency of its business processes: analyzing and designing a website-based information system that can later guide system implementation. Before creating an information system that suits its users' needs, software analysis and design are necessary. The researchers use the Iconix Process method because it builds the system around the needs of its users. The Iconix Process has four stages: requirements analysis, preliminary design, detailed design, and implementation. Kedai Lengghian can use the resulting design as a reference at the system implementation stage. In addition, the website-based information system is expected to increase the effectiveness and efficiency of the store's business processes and attract Kedai Lengghian's consumers. The result of the website analysis and design using the Iconix Process is an object-oriented design that can then be coded; the resulting UML design gives Kedai Lengghian a picture of website development based on user needs, system needs, and system design.
{"title":"Website Analysis and Design Using Iconix Process Method: Case Study: Kedai Lengghian","authors":"Anindo Saka Fitri, Eli Nurhayati, Nadilla Anidew, A. Pratita, Syifa’ Saskia Elfaretta","doi":"10.25139/inform.v8i1.4758","DOIUrl":"https://doi.org/10.25139/inform.v8i1.4758","url":null,"abstract":"Kedai Lengghian is one of the culinary businesses that has not applied technology to support its business processes. In the era of globalization, information technology is developing rapidly, allowing for business development. The development of information technology has an impact on the tight competition in the culinary business. Researchers provide solutions so that the managers and employees of Kedai Lengghian can increase the effectiveness and efficiency of business processes there. This is done by analyzing and designing a website-based information system that can later help implement system creation. Before creating an information system that suits the needs of its users, it is necessary to analyze and design software. Researchers use the Iconix Process method because the concept of building a system that is run focuses on the needs of its users. The Iconix Process has four stages: requirement, analysis, preliminary design, detailed design, and implementation. Kedai Lenghian can use information system technology to become a reference at the level of system implementation. In addition, this website-based information system is expected to increase the effectiveness and efficiency of the store's business processes and become an attraction for Kedai Lengghian consumers. The result of Website Analysis and Design Using the Iconix process is an object-oriented design that can then be coded. Also, its produced UML design gives Kedai Lengghian a picture of website making based on user needs, system needs, and system design. \u0000 ","PeriodicalId":52760,"journal":{"name":"Inform Jurnal Ilmiah Bidang Teknologi Informasi dan Komunikasi","volume":"9 16","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72451115","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-01-29 | DOI: 10.25139/inform.v8i1.5569
Comparison of Stemming Test Results of Tala Algorithms with Nazief Adriani in Abstract Documents and National News
Natalinda Pamungkas, E. Udayanti, B. Indriyono, Wildan Mahmud, Ery Mintorini, Arika Norma Wahyu Dorroty, Sanina Quamila Putri
The existence of information is undeniably needed by many people: as information becomes more important, so does the need for access to relevant documents and literature. The contents of such documents are sorted to make their meaning more understandable, a process known as stemming. Stemming is widely applied in root-word searches, and separating meaningless affixes can make information clearer. It is necessary to choose a stemming algorithm appropriate to the language used. Many stemming algorithms can perform this root-word search; two of them are the Tala and Nazief & Adriani algorithms, which differ in how they work. The Tala algorithm adopts a rule-based Porter algorithm, while the Nazief & Adriani algorithm works from a dictionary, and each has its own advantages in accuracy and speed. Therefore, this study compares the performance of the two algorithms in stemming Indonesian-language text. The trials use several different data sources to measure the speed and accuracy of each algorithm: abstracts from 30 student thesis or final-assignment reports and 200 online news articles. From the tests carried out, it can be concluded that the Tala stemming algorithm has a lower accuracy than Nazief & Adriani: Tala averages 65.29% accuracy, while Nazief & Adriani reaches 78.47%. Regarding speed, the Tala algorithm is faster, at 32.19 seconds versus 65.2 seconds for Nazief & Adriani.
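A small evaluation harness illustrating how the two stemmers could be compared on accuracy and runtime against a gold standard. The functions tala_stem and nazief_adriani_stem are hypothetical placeholders for real implementations (for Nazief & Adriani, a dictionary-based stemmer such as PySastrawi's could be substituted), and the gold-standard file format is an assumption.

```python
import time

def tala_stem(word: str) -> str:
    # Placeholder: substitute a real Tala (Porter-style, rule-based) stemmer (assumption).
    return word

def nazief_adriani_stem(word: str) -> str:
    # Placeholder: substitute a real Nazief & Adriani (dictionary-based) stemmer (assumption).
    return word

def evaluate(stem_fn, pairs):
    """Return (accuracy %, elapsed seconds) for a stemmer over (word, root) pairs."""
    start = time.perf_counter()
    correct = sum(1 for word, root in pairs if stem_fn(word) == root)
    elapsed = time.perf_counter() - start
    return 100.0 * correct / len(pairs), elapsed

# Hypothetical gold standard: tab-separated "word<TAB>root" lines.
with open("gold_standard.tsv", encoding="utf-8") as f:
    pairs = [tuple(line.rstrip("\n").split("\t")) for line in f if "\t" in line]

for name, stem_fn in [("Tala", tala_stem), ("Nazief & Adriani", nazief_adriani_stem)]:
    acc, secs = evaluate(stem_fn, pairs)
    print(f"{name}: {acc:.2f}% accuracy in {secs:.2f} s")
```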
{"title":"Comparison of Stemming Test Results of Tala Algorithms with Nazief Adriani in Abstract Documents and National News","authors":"Natalinda Pamungkas, E. Udayanti, B. Indriyono, Wildan Mahmud, Ery Mintorini, Arika Norma Wahyu Dorroty, Sanina Quamila Putri","doi":"10.25139/inform.v8i1.5569","DOIUrl":"https://doi.org/10.25139/inform.v8i1.5569","url":null,"abstract":"The existence of information is undeniably needed by many people. This statement describes the increasing importance of information and the corresponding increase in the need for access to relevant documents and literature. The contents of the information derived from these documents are then sorted to make their meaning more understandable. This sorting process is known as stemming. Stemming is a process that is widely applied in basic word searches. Separating meaningless words can make information clearer. It is necessary to pay attention to the appropriate stemming algorithm according to the language used. Many stemming algorithms can be used to perform this basic word search process. Some of them are the Tala and Nazief Adriani algorithms. The two algorithms have differences in their work processes. The Tala algorithm adopts a rule-based Porter algorithm, while the Nazief & Adriani algorithm works based on a dictionary. The two algorithms have their respective advantages in terms of accuracy and speed. Therefore, in this study, an analysis will be carried out by comparing the performance of the two algorithms in the Indonesian language text-stemming process. The trial process uses several different data sources to measure the speed and accuracy of each algorithm. Data sources used in this study included abstracts of student thesis reports or final assignments of 30 students and information from online news as many as 200. From the results of the tests that have been carried out, it can be concluded that the Tala stemming algorithm has a lower accuracy level than Nazief Adriani. The Tala algorithm only has an average accuracy of 65.29%, while Nazief Adriani has an accuracy of 78.47%. Regarding speed, the Tala algorithm has a better speed than Nazief Adriani at 32.19 seconds and Nazief & Adriani at 65.2 seconds. \u0000 ","PeriodicalId":52760,"journal":{"name":"Inform Jurnal Ilmiah Bidang Teknologi Informasi dan Komunikasi","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87044895","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-01-28 | DOI: 10.25139/inform.v8i1.5667
Comparison of Scenario Pre-processing Performance on Support Vector Machine and Naïve Bayes Algorithms for Sentiment Analysis
Nabila Valinka Pusean, N. Charibaldi, B. Santosa
Television shows need a rating in their assessment, but public opinion is also required to complete it, and sentiment analysis is needed for that. An essential step in sentiment analysis is pre-processing, because public opinion still contains many poorly formed writings. This study compares performance under different pre-processing scenarios to find the best pre-processing for Support Vector Machine (SVM) and Naïve Bayes (NB) in sentiment analysis of the television show X Factor Indonesia. The stages run from literature study, problem analysis, design, and data collection, through pre-processing with two scenarios, word weighting with TF-IDF, and classification using SVM and NB, to accuracy computed from the confusion matrix. The findings are that optimal performance is achieved with the comprehensive pre-processing scenario, which includes case folding, emoji removal, cleansing, removal of repeated characters, word normalization, negation handling, stopword removal, stemming, and tokenization, reaching an accuracy of 79.44% with the SVM algorithm. This research shows that complete pre-processing with the SVM algorithm is better in terms of accuracy, precision, recall, and F1-score.
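The two-scenario comparison can be sketched with scikit-learn as below: a minimal cleaning function versus a fuller one, each feeding TF-IDF features into SVM and Naïve Bayes. The regular expressions, file and column names are illustrative assumptions, and full_clean only stands in for the paper's richer pipeline (emoji removal, normalization, negation handling, stemming, and so on).

```python
import re
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import accuracy_score

def basic_clean(text):
    # Scenario 1: case folding and simple cleansing only.
    return re.sub(r"[^a-z\s]", " ", text.lower())

def full_clean(text):
    # Scenario 2: additionally collapse repeated characters; a stand-in for
    # the fuller pipeline described in the abstract (assumption).
    return re.sub(r"(.)\1{2,}", r"\1", basic_clean(text))

df = pd.read_csv("xfactor_tweets.csv")  # hypothetical columns: text, label
for name, clean in [("basic", basic_clean), ("full", full_clean)]:
    X = TfidfVectorizer().fit_transform(df["text"].map(clean))
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, df["label"], test_size=0.2, random_state=0)
    for clf in (LinearSVC(), MultinomialNB()):
        acc = accuracy_score(y_te, clf.fit(X_tr, y_tr).predict(X_te))
        print(f"{name} pre-processing, {clf.__class__.__name__}: {acc:.4f}")
```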
{"title":"Comparison of Scenario Pre-processing Performance on Support Vector Machine and Naïve Bayes Algorithms for Sentiment Analysis","authors":"Nabila Valinka Pusean, N. Charibaldi, B. Santosa","doi":"10.25139/inform.v8i1.5667","DOIUrl":"https://doi.org/10.25139/inform.v8i1.5667","url":null,"abstract":"Television shows need a rating in their assessment, but public opinion is also required to complete it. Sentiment analysis is necessary for its completion. An essential step in sentiment analysis is pre-processing because, in public opinion, there are still many inappropriate writings. This study aims to compare the performance results using different pre-processing scenarios to get the best pre-processing performance on Support Vector Machine (SVM) and Naïve Bayes (NB) on sentiment analysis about the television show X Factor Indonesia. The stages used to start from literature study, problem analysis, design, data collection, pre-processing with two scenarios, word weighting with TF-IDF, classification using SVM and NB, then resulting accuracy from Confusion Matrix. The findings of this research are that optimal performance can be achieved using a comprehensive pre-processing scenario. This scenario should include the following steps: case-folding, removing emoji, cleansing, removing repetition characters, word normalization, negation handling, stopwords removal, stemming, and tokenization, with an accuracy of 79.44% on the SVM algorithm. This research shows that the complete pre-processing of the SVM algorithm is better in terms of accuracy, precision, recall, and F1-score. \u0000 ","PeriodicalId":52760,"journal":{"name":"Inform Jurnal Ilmiah Bidang Teknologi Informasi dan Komunikasi","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89200643","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-01-27 | DOI: 10.25139/inform.v8i1.4519
Clustering Courses Based On Student Grades Using K-Means Algorithm With Elbow Method For Centroid Determination
Muhammad Al Ghifari, Wahyuningdiah Trisari Harsanti Putri
Students who have taken courses receive grades on a performance index weighted from 0 to 4. The volume of historical student data, particularly course grades, has the potential to reveal new insights, yet course grades are closed data used only for academic and management purposes. This research aims to group courses with high average grades. Courses are clustered with the k-means algorithm, using the elbow method to determine the centroids. Based on the sum-of-squares calculation, the optimal number of clusters is k=2. Clustering produced cluster 1 with a centroid value of 2.686 and 15 members, and cluster 2 with a centroid value of 3.245 and 40 members. It can be concluded that the members of cluster 2 are the group of courses with high average grades.
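A short sketch of the clustering step using scikit-learn's KMeans, with the elbow inspected through the within-cluster sum of squares (inertia) over candidate k values. The input file and column names are assumptions for illustration.

```python
import pandas as pd
from sklearn.cluster import KMeans

# Hypothetical input: one row per course with its average grade (0-4 scale).
courses = pd.read_csv("course_grades.csv")  # columns: course, avg_grade
X = courses[["avg_grade"]].values

# Elbow method: print the within-cluster sum of squares (inertia) per k.
for k in range(1, 7):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    print(f"k={k}: inertia={km.inertia_:.3f}")

# With k=2 (the elbow reported in the paper), label each course by cluster.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
courses["cluster"] = km.labels_
print(km.cluster_centers_)            # centroids, e.g. near 2.686 and 3.245
print(courses.groupby("cluster").size())
```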
{"title":"Clustering Courses Based On Student Grades Using K-Means Algorithm With Elbow Method For Centroid Determination","authors":"Muhammad Al Ghifari, Wahyuningdiah Trisari Harsanti Putri","doi":"10.25139/inform.v8i1.4519","DOIUrl":"https://doi.org/10.25139/inform.v8i1.4519","url":null,"abstract":"Students who have taken courses will receive grades from a performance index with a weight of 0 to 4. The amount of historical student data, particularly on course grades, has the potential to discover new insights. Still, course grades are closed data and are only for academic and management purposes. The research aims to a grouping of courses with high average grades. In this research, the clustering of courses using the k-means clustering algorithm using the elbow method to determine the centroid. Based on the Sum of Squares calculation, the optimal number of clusters with k=2 was obtained. The clustering results produced cluster 1 with a centroid value of 2.686 and 15 members and cluster 2 with a centroid value of 3.245 and 40 members. It can be concluded from this research that the members of cluster 2 are a group of courses with high average grades. \u0000 ","PeriodicalId":52760,"journal":{"name":"Inform Jurnal Ilmiah Bidang Teknologi Informasi dan Komunikasi","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75539565","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-01-26 | DOI: 10.25139/inform.v8i1.5684
Analysis of Markerless-Based Tracking Methods of Face Tracker Techniques in Detecting Human Face Movements in 2D And 3D Filter Making
M.Ilham Arief, Kusrini Kusrini, Tonny Hidayat
The marker-based tracking method utilizes markers, while the markerless-based tracking method builds AR without them; within markerless tracking there is the face tracker technique. Previous research has not compared the effectiveness of the face tracker technique in terms of success rate and accuracy. Therefore, this study tests the success rate and accuracy of applying the markerless-based face tracker technique to detecting facial movements in 2D and 3D AR, with light intensity test parameters of 20, 40, and 60 lux under WRGB light colors, face angle positions of 30° and 60°, and face-to-camera distances of 50 cm, 100 cm, and 150 cm. The highest success accuracy is obtained at a distance of 50 cm, with 93.22% for 2D AR and 96.63% for 3D AR, so the markerless face tracker technique works better in 3D than in 2D. The user-experience evaluation finds an attractiveness score of 1.865, a perspicuity score of 1.683, an efficiency score of 1.550, a dependability score of 1.638, a stimulation score of 1.500, and a novelty score of 1.013, with an overall attractiveness quality of 1.68, pragmatic quality of 1.56, and hedonic quality of 1.26. This study concludes that 2D and 3D AR face detection is evaluated positively for user experience and quality.
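As a sketch of the markerless face-tracker approach evaluated above, the following uses MediaPipe's Face Mesh as a representative tracker (an assumption: the paper does not state which tracking library was used) and counts per-frame detection success, which could then be aggregated into accuracy figures per distance, angle, and light condition.

```python
import cv2
import mediapipe as mp

face_mesh = mp.solutions.face_mesh.FaceMesh(
    static_image_mode=False, max_num_faces=1, min_detection_confidence=0.5)

cap = cv2.VideoCapture(0)          # webcam; a recorded test clip would also work
frames, detected = 0, 0
while frames < 300:                # sample ~300 frames for a quick success-rate check
    ok, frame = cap.read()
    if not ok:
        break
    frames += 1
    # MediaPipe expects RGB input; OpenCV delivers BGR.
    result = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.multi_face_landmarks:
        detected += 1              # face found: a 2D/3D filter could be anchored here

cap.release()
print(f"detection success: {100 * detected / max(frames, 1):.2f}% over {frames} frames")
```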
{"title":"Analysis of Markerless-Based Tracking Methods of Face Tracker Techniques in Detecting Human Face Movements in 2D And 3D Filter Making","authors":"M.Ilham Arief, Kusrini Kusrini, Tonny Hidayat","doi":"10.25139/inform.v8i1.5684","DOIUrl":"https://doi.org/10.25139/inform.v8i1.5684","url":null,"abstract":"The marker-based tracking method is a method that utilizes markers, while the markerless-based tracking method is a method that does not use markers in making AR. In the markerless-based tracking method, there is a face tracker technique. In previous research, no one has discussed the comparison of effectiveness concerning the success and accuracy of using the face tracker technique. Therefore, this study aims to test the effectiveness of the accuracy and accuracy of success with applying the markerless-based tracking method, the face tracker technique, in detecting facial movements. in 2D and 3D AR with light intensity test parameters of 20 Lux, 40 Lux, and 60 Lux with WRGB light color, Face angle position of 30o and 60o, and face distance from camera 50 cm, 100cm, and 150cm. The results of comparison of superior success accuracy are at a distance of 50 cm; with an accuracy rate for 2D AR of 93.22% and 96.63% for 3D. It was concluded that the face tracker technique's markerless-based tracking method works optimally in 3D compared to 2D. This research finds an attractiveness score of 1.865, a perception score of 1.683, an efficiency score of 1.550, a dependability score of 1.638, a stimulation score of 1.500, and a novelty score of 1.013. Quality with an attractiveness value of 1.68, pragmatic quality of 1.56, and hedonic quality of 1.26. This study concludes that 2D and 3D AR face detection positively evaluates user experience and quality. \u0000 \u0000 ","PeriodicalId":52760,"journal":{"name":"Inform Jurnal Ilmiah Bidang Teknologi Informasi dan Komunikasi","volume":"18 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-01-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80903722","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-01-25 | DOI: 10.25139/inform.v8i1.4478
Utilization of the IoT System to Minimize the Spread of Covid-19: A Systematic Literature Review
Nakia Natassa, Lia Suci Rahmania, Dinda Khoirunnisa, Quintin Kurnia Dikara
The Covid-19 pandemic has dramatically changed our daily lives: masks have become essential to prevent transmission, and checking body temperature upon entering public spaces has become a new norm. Internet of Things (IoT) technology can aid in implementing health protocols and reducing direct human contact. This research examines and explores the use of IoT systems in minimizing or preventing the spread of Covid-19, using a Systematic Literature Review (SLR) method to provide an overview of the topic. From the review of 15 journals, the device used most frequently across the studies is one for measuring body temperature. Furthermore, most of the studies build prototypes, and Arduino microcontrollers are the primary component in most of them. One strategy for using IoT to control the spread of Covid-19 is to develop a body-temperature monitoring and detection device that lessens users' need for direct contact with one another.
{"title":"Utilization of the IoT System to Minimize the Spread of Covid-19: A Systematic Literature Review","authors":"Nakia Natassa, Lia Suci Rahmania, Dinda Khoirunnisa, Quintin Kurnia Dikara","doi":"10.25139/inform.v8i1.4478","DOIUrl":"https://doi.org/10.25139/inform.v8i1.4478","url":null,"abstract":"The Covid-19 pandemic has dramatically changed our daily lives, with masks becoming essential to prevent transmission and checking body temperature upon entering public spaces becoming a new norm. The Internet of Things (IoT) technology can aid in implementing health protocols and reducing direct human contact. This research aims to examine and explore the use of IoT systems in minimizing or preventing the spread of Covid-19. This research utilizes a Systematic Literature Review (SLR) method to provide an overview of the topic. According to the findings of the study that was carried out using 15 different journals for review, it was discovered that the object used the most frequently in several research journals is a device for measuring body temperature. Furthermore, most research methods are prototypes, and Arduino microcontrollers are used as the primary component in most of these prototypes. The one strategy for using the internet of things (IoT) to control the spread of Covid-19 is to develop a body temperature monitoring detecting device that can lessen users' need for direct touch with one another. \u0000 ","PeriodicalId":52760,"journal":{"name":"Inform Jurnal Ilmiah Bidang Teknologi Informasi dan Komunikasi","volume":"135 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-01-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89236519","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}